28 research outputs found

    SOA-Based Distributed System in Online Transaction Processing

    Service Oriented Architecture (SOA) with transactional workflow support is a state-of-the-art architectural style for constructing enterprise applications. In this research, workflow activities invoke distributed services in a coordinated manner, using transaction-context-propagating messages, a coordination protocol, and compensation logic. We review the past, present, and future of transaction processing and transaction integrity. Most of the challenges and requirements that led to the development and evolution of transaction processing systems still apply today, and there have been some intriguing recent developments. We take an explorative approach to probe the theoretical and implementational feasibility of managing transactions in the web-service world.
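The compensation logic the abstract mentions is commonly realized as a saga: each step of the distributed transaction pairs an action with a compensating action, and on failure the completed steps are undone in reverse order. A minimal sketch, with all step names invented for illustration (the paper's actual coordination protocol is not specified here):

```python
# Minimal saga-style compensation sketch for a distributed transaction
# spanning several services. Step names are hypothetical illustrations,
# not the paper's actual protocol.

def run_saga(steps):
    """Execute (action, compensation) pairs; on failure, run the
    compensations of completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()          # compensation logic restores prior state
            return False        # saga aborted, state compensated
    return True                 # saga committed

def failing_step():
    raise RuntimeError("shipping service unavailable")

log = []
steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (failing_step,                        lambda: log.append("cancel shipment")),
]
ok = run_saga(steps)
print(ok, log)
```

Because the third step fails, only the first two compensations run, most recent first — the essence of maintaining transaction integrity without a global lock.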

    Concealment Conserving the Data Mining of Groups & Individual

    We present an overview of privacy-preserving data mining, one of the most popular directions in the data mining research community. The first part of the chapter presents approaches proposed for protecting either the sensitive data itself during mining or the sensitive mining results, in the context of traditional (relational) datasets. The second part focuses on one of the most recent and prominent directions in privacy-preserving data mining: the mining of user mobility data. Although still in its infancy, privacy-preserving mining of mobility data has attracted considerable research attention and already counts a number of methodologies, both for sensitive data protection and for sensitive knowledge hiding. Finally, the chapter closes with a roadmap for privacy-preserving mobility data mining and for privacy-preserving data mining at large.
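One of the basic sensitive-data-protection techniques such surveys cover is k-anonymity: quasi-identifiers are generalized until every record is indistinguishable from at least k-1 others. A toy sketch (the ages, the single quasi-identifier, and the doubling strategy are all invented for illustration):

```python
# Toy k-anonymity sketch: generalize the quasi-identifier (age) into
# coarser ranges until every generalized group holds at least k records.
# Data and generalization strategy are illustrative only.
from collections import Counter

def generalize_age(age, width):
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(values, k):
    return all(count >= k for count in Counter(values).values())

ages = [21, 22, 23, 35, 36, 37, 52, 55]
width = 5
while not is_k_anonymous([generalize_age(a, width) for a in ages], k=2):
    width *= 2   # coarsen until each group has >= 2 records

groups = [generalize_age(a, width) for a in ages]
print(width, groups)
```

At width 5 the two records in their fifties fall into singleton groups, so the ranges are coarsened to width 10, at which point every group contains at least two records.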

    Deep convolutional neural network to predict ground water level

    In contrast to the atmosphere and fresh surface water, which can store water only briefly, the natural water cycle can use groundwater as a “reservoir” that stores water for extended periods. Although the subsurface environment is highly variable and complex, field data are scarce, and both challenges confront physically based models; statistical modelling has gradually improved the accuracy of model calibration. Groundwater has become an increasingly important resource for meeting the water requirements of a growing global population, and such a large reserve can be drawn on even during dry seasons or droughts. This article presents a deep convolutional neural network-based model for predicting groundwater levels. The input data set comprises 174 satellite images of groundwater. Images are preprocessed using the CLAHE method, and the classification model combines CNN, SVM, and AdaBoost. The results show that the CNN achieves a classification accuracy of 98.5 per cent, and the precision and recall of the deep CNN are also better for groundwater image classification.
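The CLAHE preprocessing the article applies is a contrast-enhancement step. As a dependency-free sketch of the idea, the snippet below shows plain global histogram equalization — CLAHE's simpler relative, without the tiling and clip limit — stretching a synthetic low-contrast image; it is a stand-in, not the article's exact pipeline:

```python
import numpy as np

# Sketch of the contrast-enhancement preprocessing step. The article
# uses CLAHE; this shows plain global histogram equalization (no tiling,
# no clip limit) on a synthetic low-contrast image.

def equalize_histogram(img):
    """Map 8-bit intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalize CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(low_contrast)
print(int(low_contrast.max() - low_contrast.min()),
      int(enhanced.max() - enhanced.min()))   # intensity range widens
```

CLAHE performs the same mapping per tile with a clipped histogram, which boosts local contrast without over-amplifying noise — useful on satellite imagery with uneven illumination.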

    The prediction of sleep quality using wearable-assisted smart health monitoring systems based on statistical data

    Technology, which plays a significant role in our lives, has made it possible for many of the appliances and gadgets we use daily to be monitored and controlled remotely. Wearable devices attached to patients' bodies collect health and fitness data, from which doctors, insurers, and health providers could all benefit. Such devices, including smartwatches, smart rings, smart clothing, wristbands, and GPS shoes, are frequently used for fitness and wellness, since they allow users to track their day-to-day health. Wrist-worn wearables include devices that compute sleep characteristics by recording sleep movements. Sleep is crucial to a healthy lifestyle: inadequate sleep can harm one's physical, mental, and emotional well-being and increase the risk of a number of ailments, including stress, heart disease, high blood pressure, and insulin resistance. Deep learning (DL) models have recently been used to forecast sleep quality from wearable data collected during waking hours, and deep learning has been shown capable of predicting sleep efficiency from such data. In this regard, this study creates a novel deep learning model for a wearables-enabled smart health monitoring system (DLM-WESHMS) for the prediction of sleep quality. In the described DLM-WESHMS approach, the wearables first collect sleep-activity data, which is then pre-processed into a standard format. Sleep quality is predicted using a deep belief network (DBN) model, which employs an auto-encoder algorithm (AEA) to improve the accuracy of its sleep-quality predictions. The experimental outcomes of the DLM-WESHMS approach are investigated using several metrics, and a thorough comparison analysis shows that the DLM-WESHMS model performs significantly better than other models.
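The auto-encoder component learns a compressed representation of the wearable features by training the network to reconstruct its own input. A minimal sketch on synthetic data — a single-hidden-layer linear autoencoder with plain gradient descent, purely illustrative and not the paper's DBN-based model:

```python
import numpy as np

# Minimal autoencoder sketch for the feature-compression idea in the
# DLM-WESHMS pipeline. Synthetic "wearable" features, one linear hidden
# layer, plain gradient descent -- an illustration, not the paper's model.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # 200 samples, 8 wearable features

n_hidden, lr = 3, 0.01
W1 = rng.normal(scale=0.1, size=(8, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 8))   # decoder weights

def loss(X, W1, W2):
    return float(np.mean((X @ W1 @ W2 - X) ** 2))

first = loss(X, W1, W2)
for _ in range(500):
    H = X @ W1                           # encode to 3-dim representation
    R = H @ W2                           # decode (reconstruction)
    G = 2 * (R - X) / len(X)             # gradient of loss w.r.t. R
    W2 -= lr * H.T @ G
    W1 -= lr * X.T @ (G @ W2.T)
final = loss(X, W1, W2)
print(round(first, 3), round(final, 3))  # reconstruction error decreases
```

The trained encoder output `H` is the kind of compact feature the downstream predictor would consume; the real system replaces this toy with a DBN and non-linear activations.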

    Billet optimization for steering knuckle using Taguchi Methodology

    In the present competitive production scenario, Computer Aided Engineering (CAE) techniques have been applied with great success in metal forming research, especially in cold, warm, and hot forging, to meet customer expectations. There is growing demand for more efficient and economic manufacturing processes that reduce production cost, increase productivity, shorten lead time, and improve product quality. The traditional forging die design procedure depends on costly, tedious, and time-consuming shop-floor trials before an acceptable final product is achieved, and the difficulty grows when design requirements are stringent and die profiles are complicated. Computer-based simulation technology has emerged to overcome this problem and brings clear benefits for optimizing forging processes. In this paper, the Taguchi optimization methodology is applied to optimize the design parameters of a steering knuckle die; Taguchi offers a simple and systematic approach to obtaining optimal settings of these parameters. The design parameters evaluated are flash thickness, flash land, and billet shape, each at three levels. To obtain the results, the forging process was modelled (Catia 3D modelling software), simulated (Deform 3D forging simulation software), and examined using Taguchi's L9 orthogonal array.
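A three-factor, three-level study fits exactly into Taguchi's standard L9 orthogonal array: nine runs in which each level of each factor appears three times. The sketch below builds the L9 array and computes "smaller is better" signal-to-noise ratios; the response values are invented purely to exercise the calculation and are not the paper's simulation results:

```python
import math

# Standard Taguchi L9 orthogonal array (three columns used here for
# flash thickness, flash land, billet shape; levels coded 1..3).
# Response values below are hypothetical, not the paper's data.
L9 = [
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
]

def sn_smaller_is_better(ys):
    """S/N = -10 * log10(mean of squared responses)."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

# hypothetical forging-load responses, two replicates per run
responses = [[5.2, 5.4], [4.8, 4.9], [5.1, 5.0], [4.6, 4.7], [4.4, 4.5],
             [4.9, 5.0], [4.3, 4.2], [4.5, 4.6], [4.7, 4.8]]

sn = [sn_smaller_is_better(r) for r in responses]
best_run = max(range(9), key=lambda i: sn[i])   # highest S/N wins
print(L9[best_run], round(sn[best_run], 2))
```

The run with the highest S/N ratio identifies the most robust level combination; averaging S/N per level of each factor (the usual Taguchi response table) then ranks the factors' individual effects.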

    Defect Prediction Technology in Software Engineering Based on Convolutional Neural Network

    Software defect prediction has become a significant research direction in software engineering as a means of increasing software reliability. Defect predictions help developers identify potential problems and optimize testing resources: the number of software defects can be estimated and testing effort focused on the modules with the most problems, so that defects are addressed as early as feasible. The author proposes a defect prediction method based on a convolutional neural network. Most existing defect prediction methods rely on manually extracted software features such as lines of code, module dependencies, and stack reference depth; such methods ignore the semantic features latent in the source code, which can lead to unsatisfactory prediction results. The author instead uses a convolutional neural network to mine the semantic features implicit in the source code and applies them to software defect prediction. Empirical studies were conducted on five software projects from the PROMISE dataset using six evaluation indicators: Recall, F1, MCC, pf, gm, and AUC. The experimental results show AUC values ranging from 0.65 to 0.86 across projects, indicating that the convolutional neural network-based defect prediction model achieves promising prediction accuracy.
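Most of the reported indicators derive directly from the confusion matrix of a binary (defective vs. clean) classifier. A sketch with made-up counts, purely to show the formulas (AUC needs ranked prediction scores, so only the threshold-based metrics appear):

```python
import math

# The threshold-based indicators from the study (Recall, F1, MCC, pf, gm)
# computed from a confusion matrix. Counts are invented to exercise the
# formulas; they are not the paper's results.

def defect_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)                       # probability of detection
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    pf = fp / (fp + tn)                           # probability of false alarm
    gm = math.sqrt(recall * (1 - pf))             # geometric mean
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Recall": recall, "F1": f1, "MCC": mcc, "pf": pf, "gm": gm}

m = defect_metrics(tp=40, fp=10, tn=140, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

The pf/gm pair is popular in defect prediction precisely because defect datasets are imbalanced: gm penalizes a model that buys recall by flooding developers with false alarms.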

    Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification

    Telehealth connects patients to vital healthcare services via remote monitoring, wireless communications, videoconferencing, and electronic consults. By increasing access to specialists and physicians, telehealth helps ensure that patients receive the proper care at the right time and place. Teleophthalmology is a branch of telemedicine that delivers eye care using digital medical equipment and telecommunication technologies. Multimedia computing with Explainable Artificial Intelligence (XAI) for telehealth has the potential to transform many aspects of society, but several technical challenges must be resolved before that potential can be realized. Advances in artificial intelligence methods and tools reduce waste and wait times, improve service efficiency and insight, and increase speed, accuracy, and productivity in medicine and telehealth. This study therefore develops an XAI-enabled teleophthalmology model for diabetic retinopathy grading and classification (XAITO-DRGC). The proposed XAITO-DRGC model utilizes OphthoAI IoMT headsets to enable remote monitoring of diabetic retinopathy (DR). As pre-processing, the model applies median filtering (MF) and contrast enhancement; it then performs U-Net-based image segmentation and SqueezeNet-based feature extraction. Finally, the Archimedes optimization algorithm (AOA) with a bidirectional gated recurrent convolutional unit (BGRCU) is exploited for DR detection and classification. The XAITO-DRGC method is validated experimentally on a benchmark dataset, and the outcomes are assessed from distinct perspectives; extensive comparison studies confirm the improvements of the XAITO-DRGC model over recent approaches.
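The median-filtering (MF) pre-processing step suppresses impulse ("salt") noise in retinal images before segmentation. A dependency-light sketch of a 3x3 median filter on a synthetic image — illustrative only, not the study's implementation:

```python
import numpy as np

# Sketch of the median-filtering (MF) pre-processing step applied before
# segmentation: a plain 3x3 median filter in NumPy on a synthetic image
# with isolated salt-noise pixels. Illustrative only.

def median_filter_3x3(img):
    padded = np.pad(img, 1, mode="edge")
    # stack the nine 3x3-neighbourhood shifts, take the median per pixel
    stacked = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0).astype(img.dtype)

img = np.full((32, 32), 120, dtype=np.uint8)
img[5, 5] = img[20, 17] = 255            # isolated "salt" noise pixels
smoothed = median_filter_3x3(img)
print(int(smoothed[5, 5]), int(smoothed[20, 17]))  # prints 120 120
```

Unlike a mean filter, the median leaves edges largely intact while removing isolated outliers, which is why it is the usual choice before vessel and lesion segmentation.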
